
    Machine ethics via logic programming

    Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis investigates further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, for machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. All of these are important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through examples taken off the shelf from the morality literature.
    These applications include: (1) modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE. Fundação para a Ciência e a Tecnologia (FCT) grant SFRH/BD/72795/2010; CENTRIA and DI/FCT/UNL for the supplementary funding.
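
The counterfactual component can be pictured as evaluating a query before and after an intervention on the program: "had the hypothesis not held, would the outcome still follow?" The Python sketch below only illustrates that idea over a toy definite rule base; it is not the thesis' TABDUAL-based implementation, and the rules and predicate names are invented for the example.

```python
# Counterfactual evaluation by intervention: derive consequences from the
# factual program, then from a program where the antecedent is withdrawn.

def derive(facts, rules):
    """Forward-chain definite rules (head, [body...]) to a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in known.union(body) & set(body)):
                if all(b in known for b in body):
                    known.add(head)
                    changed = True
    return known

RULES = [
    ("death", ["shoot"]),        # shooting causes death
    ("shoot", ["intend_harm"]),  # intent leads to shooting
]

factual = derive({"intend_harm"}, RULES)
# Counterfactual: "had the agent not intended harm, would death occur?"
# Intervene by removing the antecedent fact and re-deriving.
counterfactual = derive(set(), RULES)

factual_death = "death" in factual
counterfactual_death = "death" in counterfactual
```

Here the counterfactual is decided by comparing the two derivations: death follows factually but not under the intervention, so the harm counterfactually depends on the intent.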

    PS-LTL for constraint-based security protocol analysis

    Several formal approaches have been proposed to analyse security protocols, e.g. [2,7,11,1,6,12]. Recently, great interest has been growing in the use of the constraint solving approach. Initially proposed by Millen and Shmatikov [9], this approach allows analysis of a finite number of protocol sessions. Yet, the representation of protocol runs by symbolic traces (as opposed to concrete traces) captures the possibility of having an unbounded message space, allowing analysis over an infinite state space. A constraint is defined as a pair consisting of a message M and a set of messages K that represents the intruder's knowledge. Millen and Shmatikov present a procedure to solve a set of constraints, i.e. to show that in each constraint, M can be built from K. When a set of constraints is solved, a concrete trace representing an attack on the protocol can be extracted.

    Corin and Etalle [4] have improved the work of Millen and Shmatikov by presenting a more efficient procedure. However, none of these constraint-based systems provides enough flexibility and expressiveness in specifying security properties. For example, to check secrecy an artificial protocol role is added to simulate whether a secret can be learned by an intruder. Authentication also cannot be checked directly. Moreover, only a built-in notion of authentication is implemented by Millen and Shmatikov in their Prolog implementation [10]. This problem motivates our current work.

    A logical formalism is considered to be an appropriate solution to improve the flexibility and expressiveness in specifying security properties. A preliminary attempt to use logic for specifying local security properties in a constraint-based setting has been carried out [3]. Inspired by this work and the successful NPATRL [11,8], we currently explore a variant of linear temporal logic (LTL) over finite traces, PS-LTL, standing for pure-past security LTL [5]. In contrast to standard LTL, this logic deals only with past events in a trace. In our current work, a protocol is modelled as in previous works [9,4,3], viz. by protocol roles. A protocol role is a sequence of send and receive events, together with status events to indicate, e.g., that a protocol role has completed its protocol run. A scenario is then used to deal with the number of sessions and protocol roles considered in the analysis.

    Integrating PS-LTL into our constraint solving approach presents a challenge, since we need to develop a sound and complete decision procedure against symbolic traces, instead of concrete traces. Our idea for addressing this problem is to concretize symbolic traces incrementally while deciding a formula. Basically, the decision procedure consists of two steps: transform and decide. The former step transforms a PS-LTL formula with respect to the current trace into a so-called elementary formula, built from constraints and equalities using logical connectives and quantifiers. The decision is then performed by the latter step through solving the constraints and checking the equalities.

    Although we define a decision procedure for only a fragment of PS-LTL, this fragment is expressive enough to specify several security properties, such as various notions of secrecy and authentication, and also data freshness. We provide a Prolog implementation and have analysed several security protocols.

    There are many directions for improvement. From the implementation point of view, the efficiency of the decision procedure can still be improved. We would also like to investigate the expressiveness of the logic for specifying other security properties. This may result in an extension of the decision procedure for a larger fragment of the logic. Another direction is to characterize the expressive power of PS-LTL compared to other security requirement languages.
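
The pure-past character of the logic means a formula at position i of a trace is decided by looking only at positions 0..i. As a rough illustration over concrete (not symbolic) traces, here is a toy Python evaluator; the operator set, formula encoding, and event names are assumptions made for the sketch, not the paper's actual syntax or decision procedure.

```python
# A minimal evaluator for a pure-past temporal fragment over a finite trace,
# modelled as a list of event sets. Formulas are nested tuples:
# ("atom", a), ("not", f), ("and", f, g),
# ("yesterday", f), ("once", f), ("since", f, g).

def holds(formula, trace, i):
    """Does `formula` hold at position i of `trace`?"""
    op = formula[0]
    if op == "atom":
        return formula[1] in trace[i]
    if op == "not":
        return not holds(formula[1], trace, i)
    if op == "and":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "yesterday":              # Y f: f held at the previous step
        return i > 0 and holds(formula[1], trace, i - 1)
    if op == "once":                   # O f: f held at some step <= i
        return any(holds(formula[1], trace, j) for j in range(i + 1))
    if op == "since":                  # f S g: g held once, f ever since
        return any(holds(formula[2], trace, j)
                   and all(holds(formula[1], trace, k)
                           for k in range(j + 1, i + 1))
                   for j in range(i + 1))
    raise ValueError(op)

# A secrecy-style check at the end of a run: the intruder has never learned
# the secret at any point up to completion.
trace = [{"send"}, {"recv"}, {"complete"}]
secret_never_leaked = ("not", ("once", ("atom", "intruder_knows_secret")))
ok = holds(secret_never_leaked, trace, len(trace) - 1)
```

The paper's contribution is precisely that this kind of check must be lifted from concrete traces, as above, to symbolic traces via the transform-and-decide procedure.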

    Tabling with Interned Terms on Contextual Abduction

    Abduction (also called abductive reasoning) is a form of logical inference that starts with an observation and proceeds to find the best explanations. In this paper, we improve the tabling in contextual abduction technique with an advanced tabling feature of XSB Prolog, namely tabling with interned terms. This feature enables us to store abductive solutions as interned ground terms in a global area only once, so that the table space used to store abductive solutions becomes more efficient. We implemented this improvement in a prototype called TABDUAL+INT. Although the experimental results show that tabling with interned terms is relatively slower than tabling without interned terms when used to return the first solution of a subgoal, it is relatively faster when used to return all solutions of a subgoal. Furthermore, tabling with interned terms uses table space more efficiently when performing abduction in both artificial and real-world cases, compared to tabling without interned terms.

    Surrogate Model-based Multi-Objective Optimization in Early Stages of Ship Design

    In the early stages of ship design, the decision on the ship's main dimensions significantly impacts the ship's performance and the total cost of ownership. This paper focuses on an optimization approach based on surrogate models at the early stages of ship design. The objectives are to minimize power requirements and building costs while still satisfying the constraints. We compare three surrogate model approaches: Kriging, BPNN-PSO (Backpropagation Neural Network with Particle Swarm Optimizer), and MLP (Multi-Layer Perceptron), in two multi-objective optimization algorithms: MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition) and NSGA-II (Non-Dominated Sorting Genetic Algorithm II). The experimental results show that the MLP surrogate models achieve the best performance with an MAE of 6.03, and BPNN-PSO takes second place with an MAE of 7.2. BPNN-PSO and MLP with MOEA/D and NSGA-II improve the design with around 58% smaller adequate power and 6% less steel weight than the original design. However, BPNN-PSO and MLP have lower hypervolume than Kriging for both optimization algorithms. On the other hand, Kriging has the poorest model accuracy, with an MAE of 22.2, but produces the highest hypervolume, the lowest computational time, and far lower objective values than BPNN-PSO and MLP for both optimization algorithms. Nevertheless, all three surrogate model approaches can significantly improve ship design solutions and reduce work time in the early stages of design.
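
At the core of both MOEA/D and NSGA-II sits the comparison of candidate designs scored by the surrogate on the two objectives. The sketch below illustrates only that selection step, Pareto dominance and non-dominated front extraction; the surrogate function and the design variables are invented stand-ins, not the paper's trained Kriging/BPNN-PSO/MLP models.

```python
# Extract the non-dominated (Pareto) front of designs scored by a toy
# surrogate on two minimization objectives: required power and cost.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

def surrogate(design):
    length, beam = design                # hypothetical main dimensions
    power = 1000.0 / (length * beam)     # toy proxy for required power
    cost = 5.0 * length + 3.0 * beam     # toy proxy for building cost
    return (power, cost)

designs = [(10.0, 2.0), (8.0, 2.5), (8.0, 1.5), (15.0, 3.0)]
scored = [surrogate(d) for d in designs]
front = pareto_front(scored)
```

Design (10.0, 2.0) scores (50.0, 56.0) and is dominated by (8.0, 2.5) at (50.0, 47.5), equal power at lower cost, so it drops off the front; the remaining three trade power against cost and survive. Hypervolume, used in the paper to rank the surrogates, then measures how much objective space such a front dominates.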

    Modelling Morality with Prospective Logic

    This paper shows how moral decisions can be drawn computationally by using prospective logic programs. These are employed to model moral dilemmas, as they are able to prospectively look ahead at the consequences of hypothetical moral judgments. With this knowledge of consequences, moral rules are then used to decide the appropriate moral judgments. The whole of moral reasoning is achieved via a priori constraints and a posteriori preferences on abductive stable models, two features available in prospective logic programming. In this work we model various moral dilemmas taken from the classic trolley problem and employ the principle of double effect as the moral rule. Our experiments show that preferred moral decisions, i.e. those following the principle of double effect, are successfully delivered. Additionally, we consider another moral principle, the principle of triple effect, in our implementation. We show that our prospective logic programs allow us to explain computationally the different moral judgments drawn from these two slightly but distinctly different moral principles.
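
The two-stage scheme, a priori constraints pruning abductive scenarios and a posteriori preferences choosing among the survivors, can be caricatured in a few lines of Python. This is only an illustration of the control flow, not the prospective LP machinery itself; the trolley scenarios and their consequence values are invented for the example.

```python
# Stage 1: a priori integrity constraint (double effect) prunes scenarios.
# Stage 2: a posteriori preference selects among admissible scenarios.

# Each abductive scenario: a chosen action plus its foreseen consequences.
scenarios = [
    {"action": "divert",     "saved": 5, "killed": 1, "harm_as_means": False},
    {"action": "push_man",   "saved": 5, "killed": 1, "harm_as_means": True},
    {"action": "do_nothing", "saved": 0, "killed": 5, "harm_as_means": False},
]

def dde_constraint(s):
    """Double effect: harm may be a side effect, never a means to the good."""
    return not s["harm_as_means"]

admissible = [s for s in scenarios if dde_constraint(s)]

# Preference: among admissible scenarios, maximise net lives saved.
best = max(admissible, key=lambda s: s["saved"] - s["killed"])
```

Pushing the man is pruned a priori (the harm is the means), and the preference then favours diverting the trolley over doing nothing, matching the judgment the double-effect principle is expected to deliver.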

    Incremental Inductive Learning of Answer Set Programs for Maps Generation Problems

    In game development, Procedural Content Generation is an approach that replaces the designer's task of creating the content of games, e.g., game maps. We introduce an incremental learning process that utilizes Inductive Learning (IL) of Answer Set Programs (ASP) to automate solving map generation problems rather than explicitly specifying the characteristics of the maps. In an incremental learning process, a complex learning task is divided into a sequence of learning iterations, where each iteration consists of a set of smaller learning tasks, each learning a set of rules. To speed up the learning process, the tasks in the same iteration are solved asynchronously. Our experiments show that IL of ASP successfully learns an answer set program; that is, it provides a set of rules for generating a collection of game maps that possess the same characteristics as the maps referred to in the learning scenario.
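
The incremental flavour of the learning can be pictured as a covering loop: each step admits a rule that covers some remaining positive examples while covering no negative example. The Python toy below illustrates only that loop, not the ASP/ILP solver itself; the map cells, candidate rules, and predicate names are all invented.

```python
# Greedy incremental covering: admit rules one at a time until every
# positive example is covered, rejecting rules that cover negatives.

def learn_incrementally(positives, negatives, candidate_rules):
    """candidate_rules: list of (name, predicate) pairs, tried in order."""
    learned, remaining = [], set(positives)
    for name, pred in candidate_rules:
        if not remaining:
            break
        covered = {e for e in remaining if pred(e)}
        if covered and not any(pred(e) for e in negatives):
            learned.append(name)       # rule is sound and useful: keep it
            remaining -= covered
    return learned, remaining

# Map cells as (x, y, tile). Positives are cells that should be walkable.
positives = [(1, 1, "floor"), (2, 1, "floor"), (1, 2, "door")]
negatives = [(0, 0, "wall"), (3, 3, "lava")]
candidates = [
    ("walkable_if_floor", lambda c: c[2] == "floor"),
    ("walkable_if_door",  lambda c: c[2] == "door"),
    ("walkable_if_any",   lambda c: True),   # unsound: covers negatives
]
rules, uncovered = learn_incrementally(positives, negatives, candidates)
```

The first two rules together cover all positives, so learning stops before the unsound catch-all rule is ever considered; the paper's asynchronous variant would dispatch the tasks within one iteration in parallel.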

    Towards Improving the Resource Usage of SAT solvers

    The paper presents our work on cache analysis of SAT solving. The aim is to study how resources are utilized by a SAT solver and to use this knowledge to improve resource usage in SAT solving. The analysis is performed mainly on our CDCL-based SAT solver and additionally on MiniSAT and PrecoSAT. The measurement is conducted using sample-based profiling on industrial benchmarks from the SAT competition 2009. During the measurement, the following hardware events are traced: total cycles, stall cycles, L2 cache hits, and L2 cache misses. From the measurement results, our runtime and implementation analysis reveal that several improvements to resource usage can be made, in particular to data structures and memory access. These improvements result in about a 60% speedup in runtime performance for our solver.
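
One representative kind of data-structure improvement in this setting is replacing a pointer-chasing clause database (a list of per-clause lists scattered across the heap) with a single flat literal array indexed by per-clause offsets, so clause traversal during propagation is sequential. The sketch below only models the layout; it is not the paper's solver, and the cache benefit itself materialises in the C/C++ implementation, not in Python.

```python
# Flat clause database: all literals in one contiguous array, with
# (offset, length) headers per clause. Positive ints are positive literals,
# negative ints are negated literals (DIMACS-style).

class FlatClauseDB:
    def __init__(self):
        self.literals = []      # one contiguous array of all literals
        self.index = []         # (offset, length) per clause id

    def add_clause(self, lits):
        self.index.append((len(self.literals), len(lits)))
        self.literals.extend(lits)

    def clause(self, cid):
        off, n = self.index[cid]
        return self.literals[off:off + n]

db = FlatClauseDB()
db.add_clause([1, -2, 3])   # x1 OR NOT x2 OR x3
db.add_clause([-1, 2])      # NOT x1 OR x2
first = db.clause(0)
second = db.clause(1)
```

Because `literals` is one array, scanning consecutive clauses touches adjacent memory, which is exactly the access pattern that reduces the stall cycles and L2 misses the profiling traces.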

    Moral decision making with ACORDA

    This paper shows how moral decisions can be drawn computationally by using ACORDA, a working implementation of prospective logic programming. ACORDA is employed to model moral dilemmas, as it is able to prospectively look ahead at the consequences of hypothetical moral judgments. With this knowledge of consequences, moral rules are then used to decide the appropriate moral judgments. The whole of moral reasoning is achieved via a priori constraints and a posteriori preferences on abductive stable models, two features available in ACORDA.